Unit Testing Challenges with Automated Marking
Teaching software testing is difficult due to its abstract, conceptual
nature. The lack of tangible outcomes and the limited emphasis on hands-on
experience further compound the challenge, often leaving students struggling
to grasp the material. This can result in waning
engagement and diminishing motivation over time. In this paper, we introduce
online unit testing challenges with automated marking as a learning tool via
the EdStem platform to enhance students' software testing skills and
understanding of software testing concepts. We then conducted a survey to
investigate the impact of the unit testing challenges with automated marking on
student learning. The results from 92 participants showed that our unit testing
challenges have kept students more engaged and motivated, fostering deeper
understanding and learning, while the automated marking mechanism enhanced
students' learning progress, helping them to understand their mistakes and
misconceptions more quickly than traditional human-written feedback.
Consequently, these results inform educators that the online unit testing
challenges with automated marking improve overall student learning experience,
and are an effective pedagogical practice in software testing.Comment: 5 pages, accepted at the 30th Asia-Pacific Software Engineering
Conference (APSEC 2023
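The kind of automatically marked challenge described above can be sketched in a few lines of Python. This is a minimal, hypothetical illustration assuming a mutation-style marking scheme (student-written tests are scored by whether they pass on a correct implementation and fail on a seeded bug); the names `reference_impl`, `buggy_impl`, `student_tests`, and `mark` are illustrative, not the paper's actual EdStem setup.

```python
def reference_impl(values):
    """Correct implementation: sum of the positive numbers only."""
    return sum(v for v in values if v > 0)

def buggy_impl(values):
    """Seeded bug: negative values are not filtered out."""
    return sum(values)

def student_tests(impl):
    """A student's submitted unit tests, run against a given implementation."""
    assert impl([1, 2, 3]) == 6
    assert impl([1, -2, 3]) == 4  # this case exposes the seeded bug
    assert impl([]) == 0

def mark(tests):
    """Award a mark only if the tests pass on the reference implementation
    and fail on the buggy one, i.e. the seeded bug is detected."""
    try:
        tests(reference_impl)
    except AssertionError:
        return 0  # the tests themselves are wrong
    try:
        tests(buggy_impl)
    except AssertionError:
        return 1  # the tests detect the seeded bug: full marks
    return 0      # the tests are too weak to catch the bug

print("score:", mark(student_tests))  # score: 1
```

A test suite that only checks `impl([1, 2, 3]) == 6` would pass on both implementations and score 0, which is how such a scheme gives students immediate, automated feedback on the strength of their tests.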
Syntax-Aware On-the-Fly Code Completion
Code completion aims to help improve developers' productivity by suggesting
the next code tokens from a given context. Various approaches have been
proposed to incorporate abstract syntax tree (AST) information for model
training, ensuring that code completion is aware of the syntax of the
programming languages. However, existing syntax-aware code completion
approaches are not on-the-fly: we found that for two-thirds of the
characters that developers type, an AST cannot be extracted, because AST
extraction requires syntactically correct source code, limiting its
practicality in real-world scenarios. On the other hand, existing on-the-fly
code completion approaches do not yet consider syntactic information. In this
paper, we propose PyCoder to
leverage token types, a kind of lightweight syntactic information, which is
readily available and aligns with the natural order of source code. PyCoder
is trained in a multi-task manner: by learning the supporting
task of predicting token types during the training phase, the models achieve
better performance on predicting tokens and lines of code without the need for
token types in the inference phase. Comprehensive experiments show that PyCoder
achieves the first rank on the CodeXGLUE leaderboard with an accuracy of 77.12%
for the token-level predictions, which is 0.43%-24.25% more accurate than
baselines. In addition, PyCoder achieves an exact match of 43.37% for the
line-level predictions, which is 3.63%-84.73% more accurate than baselines.
These results lead us to conclude that token type information (an alternative
form of syntactic information), which has rarely been used in the past, can greatly improve
the performance of code completion approaches, without requiring the
syntactically correct source code like AST-based approaches do. Our PyCoder is
publicly available on HuggingFace.

Comment: 14 pages, under review at IEEE Transactions on Software Engineering.
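The practicality gap described above, where ASTs fail on mid-keystroke code while token types remain available in source order, can be illustrated with Python's standard library. This is a sketch of the general idea only, not PyCoder's actual pipeline; the snippet `partial_code` is an invented example of incomplete code.

```python
import ast
import io
import tokenize

# A partially typed snippet, as a developer might leave it mid-keystroke.
partial_code = "def add(a, b):\n    return a +\n"

# AST extraction fails: parsing requires syntactically correct source code.
try:
    ast.parse(partial_code)
    ast_available = True
except SyntaxError:
    ast_available = False
print("AST available:", ast_available)  # AST available: False

# Token types, by contrast, are available on the fly and follow the
# natural left-to-right order of the source code.
toks = list(tokenize.generate_tokens(io.StringIO(partial_code).readline))
for tok in toks:
    if tok.type in (tokenize.NAME, tokenize.OP):
        print(tokenize.tok_name[tok.type], repr(tok.string))
```

Even though the snippet cannot be parsed, the tokenizer still yields `NAME` and `OP` types for every token typed so far, which is the kind of lightweight syntactic signal the abstract describes.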
Ethics in AI through the Developer's View: A Grounded Theory Literature Review
The term ethics is widely used, explored, and debated in the context of
developing Artificial Intelligence (AI) based software systems. In recent
years, numerous incidents have raised the profile of ethical issues in AI
development and led to public concerns about the proliferation of AI technology
in our everyday lives. But what do we know about the views and experiences of
those who develop these systems: the AI developers? We conducted a grounded
theory literature review (GTLR) of 38 primary empirical studies that included
AI developers' views on ethics in AI and analysed them to derive five
categories - developer awareness, perception, need, challenge, and approach.
These are underpinned by multiple codes and concepts that we explain with
evidence from the included studies. We present a taxonomy of ethics in AI from
developers' viewpoints to assist AI developers in identifying and understanding
the different aspects of AI ethics. The taxonomy provides a landscape view of
the key aspects that concern AI developers when it comes to ethics in AI. We
also share an agenda for future research studies and recommendations for
developers, managers, and organisations to help in their efforts to better
consider and implement ethics in AI.

Comment: 40 pages, 5 figures, 4 tables.